Welcome to your first Learning Lab Case Study! The case study activities included in each learning lab demonstrate how key data-intensive research workflow processes, as featured in exemplary STEM education research studies, can be implemented in R. Case studies also provide a holistic setting in which to explore important foundational LA topics integral to data analysis, such as reproducible research, the use of APIs, and the ethical use of educational data.
This orientation case study is a warm-up activity to introduce you to R Markdown, which is heavily integrated into each LASER Learning Lab. You may have used R Markdown before - or you may not have! Either is fine, as this task is designed with the assumption that you have not used R Markdown before. Along the way, we’ll focus on the following tasks in this interactive coding case study:
What you are working in now is an R Markdown file, as indicated by the .Rmd file name extension, which stores information in plain text markdown syntax. R Markdown documents are fully reproducible and use a productive notebook interface to combine narrative text and “chunks” of code to produce a range of static or dynamic output formats including: HTML, PDF, MS Word, HTML5 slides, Tufte-style handouts, books, dashboards, Shiny applications, scientific articles, websites, and more.
There are two keys to your use of R Markdown for this activity: narrative text written in markdown syntax, and “chunks” of R code set off by three backticks and the language in curly braces {}, as well as a set of buttons in the upper right corner of each chunk for running the code. Click the green arrow button on the right side of the code chunk below to run the R code and view the image file named laser-cycle.png stored in the img folder in your Files pane.
knitr::include_graphics("img/laser-cycle.png")
You may have noticed that the words in this diagram correspond to the sections outlined at the beginning of this document. These terms, or processes, are part of a framework called the data-intensive research workflow, which comes from the book Learning Analytics Goes to School (Krumm, Means, and Bienkowski 2018). You can check that out, but don’t feel any need to dive deep for now - we’ll be spending more time on this throughout the week. For now, know that this document and all of our LASER Lab case studies are organized around these five components.
Now let’s get started!
First and foremost, data-intensive research involves defining and refining a research question and developing an understanding of where your data come from (Krumm, Means, and Bienkowski 2018). This part of the process also involves setting up a reproducible workflow (Gandrud 2013) so your work can be understood and replicated by other researchers. For now, we’ll focus on just a few parts of this process, diving much more deeply into these components in later learning labs.
As highlighted in Chapter 6 of Data Science in Education Using R (Estrellado et al. 2020), one of the first steps of every workflow should be to set up your “Project” within RStudio. Recall that:
A Project is the home for all of the files, images, reports, and code that are used in any given project
Since we are working in Posit Cloud with an R project cloned from GitHub, an R project has already been set up for you, as indicated by the .Rproj file in the main directory. Locate the Files tab in the lower right window pane and see if you can find this file. With a project already set up for us, we will instead focus on loading the required packages we’ll need for analysis.
Packages, sometimes referred to as libraries, are shareable collections of R code that can contain functions, data, and/or documentation and extend the functionality of R. You can always check which packages have already been installed and loaded into RStudio by looking at the Packages tab in the same pane as the Files tab. Click the Packages tab to see which packages have already been installed for this project.
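If a package you need does not appear in the Packages tab, you can install it yourself with install.packages(). The packages for this lab are already installed on Posit Cloud, so the following is just a reference sketch:

```r
# Install the {tidyverse} only if it is not already available,
# then load it for use in this session.
if (!requireNamespace("tidyverse", quietly = TRUE)) {
  install.packages("tidyverse")
}
library(tidyverse)
```

Note that you typically only need to install a package once per machine, but you must load it with library() in every new R session.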
One package that we’ll be using extensively in our learning labs is the {tidyverse} package. The {tidyverse} is actually a collection of R packages designed for wrangling and exploring data (sound familiar?) and which all share an underlying design philosophy, grammar, and data structures. These shared features are sometimes referred to as “tidy data principles” (Wickham and Grolemund 2016).
To load the tidyverse, we’ll use the library() function.
Go ahead and run the code chunk below:
library(tidyverse)
## ── Attaching core tidyverse packages ──────────────────────── tidyverse 2.0.0 ──
## ✔ dplyr 1.1.2 ✔ readr 2.1.4
## ✔ forcats 1.0.0 ✔ stringr 1.5.0
## ✔ ggplot2 3.4.2 ✔ tibble 3.2.1
## ✔ lubridate 1.9.2 ✔ tidyr 1.3.0
## ✔ purrr 1.0.1
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ℹ Use the conflicted package (<http://conflicted.r-lib.org/>) to force all conflicts to become errors
Please do not worry if you saw a number of messages: those probably mean that the tidyverse loaded just fine. If you see an error, though, try to interpret the error message, search for its contents online, or reach out to us for assistance.
As we noted in the beginning, these case studies are meant to be interactive. Throughout each case study, you’ll see “Your Turn” headings like the one above that will ask you to apply some of your R skills to help with the analysis. These Your Turns are intended to help you practice newly introduced functions or R code and reinforce R skills you have already learned.
Use the code chunk below to load the {skimr} package into our environment as well. Skimr is a handy package that provides summary statistics that you can skim quickly to understand your data and see what may be missing. We’ll be using this later in the Explore section of this case study.
library(skimr)
The data we’ll explore in this case study were originally collected for a research study, which utilized a number of different data sources to understand students’ course-related motivation. These courses were designed and taught by instructors through a state-wide online course provider designed to supplement—but not replace—students’ enrollment in their local school. The data used in this case study have already been “wrangled” quite a bit, but the original datasets included:
A self-report survey assessing three aspects of students’ motivation
Log-trace data, such as data output from the learning management system (LMS)
Discussion board data
Academic achievement data
Next, we’ll load our data - specifically, a CSV text file, the kind
that you can export from Microsoft Excel or Google Sheets - into R,
using the read_csv() function in the next chunk.
Clicking the green arrow runs the code; do that next to read the
sci-online-classes.csv file stored in the data
folder of your R project:
sci_data <- read_csv("data/sci-online-classes.csv")
## Rows: 603 Columns: 30
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (6): course_id, subject, semester, section, Gradebook_Item, Gender
## dbl (23): student_id, total_points_possible, total_points_earned, percentage...
## lgl (1): Grade_Category
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
Nice work! You should now see a new data “object” named
sci_data saved in your Environment pane. Try clicking on it
and see what happens!
Now let’s learn another way to inspect our data. Run the next chunk and look at the results, tabbing left or right with the arrows, or scanning through the rows by clicking the numbers at the bottom of the pane with the print-out of the data you loaded:
sci_data
## # A tibble: 603 × 30
## student_id course_id total_points_possible total_points_earned
## <dbl> <chr> <dbl> <dbl>
## 1 43146 FrScA-S216-02 3280 2220
## 2 44638 OcnA-S116-01 3531 2672
## 3 47448 FrScA-S216-01 2870 1897
## 4 47979 OcnA-S216-01 4562 3090
## 5 48797 PhysA-S116-01 2207 1910
## 6 51943 FrScA-S216-03 4208 3596
## 7 52326 AnPhA-S216-01 4325 2255
## 8 52446 PhysA-S116-01 2086 1719
## 9 53447 FrScA-S116-01 4655 3149
## 10 53475 FrScA-S116-02 1710 1402
## # ℹ 593 more rows
## # ℹ 26 more variables: percentage_earned <dbl>, subject <chr>, semester <chr>,
## # section <chr>, Gradebook_Item <chr>, Grade_Category <lgl>,
## # FinalGradeCEMS <dbl>, Points_Possible <dbl>, Points_Earned <dbl>,
## # Gender <chr>, q1 <dbl>, q2 <dbl>, q3 <dbl>, q4 <dbl>, q5 <dbl>, q6 <dbl>,
## # q7 <dbl>, q8 <dbl>, q9 <dbl>, q10 <dbl>, TimeSpent <dbl>,
## # TimeSpent_hours <dbl>, TimeSpent_std <dbl>, int <dbl>, pc <dbl>, uv <dbl>
What do you notice about this data set? What do you wonder? Add one or two observations in the space below:
There are other ways to inspect your data; the glimpse()
function provides one such way. Run the code below to take a glimpse at
your data.
glimpse(sci_data)
## Rows: 603
## Columns: 30
## $ student_id <dbl> 43146, 44638, 47448, 47979, 48797, 51943, 52326,…
## $ course_id <chr> "FrScA-S216-02", "OcnA-S116-01", "FrScA-S216-01"…
## $ total_points_possible <dbl> 3280, 3531, 2870, 4562, 2207, 4208, 4325, 2086, …
## $ total_points_earned <dbl> 2220, 2672, 1897, 3090, 1910, 3596, 2255, 1719, …
## $ percentage_earned <dbl> 0.6768293, 0.7567261, 0.6609756, 0.6773345, 0.86…
## $ subject <chr> "FrScA", "OcnA", "FrScA", "OcnA", "PhysA", "FrSc…
## $ semester <chr> "S216", "S116", "S216", "S216", "S116", "S216", …
## $ section <chr> "02", "01", "01", "01", "01", "03", "01", "01", …
## $ Gradebook_Item <chr> "POINTS EARNED & TOTAL COURSE POINTS", "ATTEMPTE…
## $ Grade_Category <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ FinalGradeCEMS <dbl> 93.45372, 81.70184, 88.48758, 81.85260, 84.00000…
## $ Points_Possible <dbl> 5, 10, 10, 5, 438, 5, 10, 10, 443, 5, 12, 10, 5,…
## $ Points_Earned <dbl> NA, 10.00, NA, 4.00, 399.00, NA, NA, 10.00, 425.…
## $ Gender <chr> "M", "F", "M", "M", "F", "F", "M", "F", "F", "M"…
## $ q1 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 4, 3, 5, NA,…
## $ q2 <dbl> 4, 4, 4, 5, 3, NA, 5, 3, 3, NA, NA, 5, 3, 3, NA,…
## $ q3 <dbl> 4, 3, 4, 3, 3, NA, 3, 3, 3, NA, NA, 3, 3, 5, NA,…
## $ q4 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 5, 3, 5, NA,…
## $ q5 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 5, 4, 5, NA,…
## $ q6 <dbl> 5, 4, 4, 5, 4, NA, 5, 4, 3, NA, NA, 5, 3, 5, NA,…
## $ q7 <dbl> 5, 4, 4, 4, 4, NA, 4, 3, 3, NA, NA, 5, 3, 5, NA,…
## $ q8 <dbl> 5, 5, 5, 5, 4, NA, 5, 3, 4, NA, NA, 4, 3, 5, NA,…
## $ q9 <dbl> 4, 4, 3, 5, NA, NA, 5, 3, 2, NA, NA, 5, 2, 2, NA…
## $ q10 <dbl> 5, 4, 5, 5, 3, NA, 5, 3, 5, NA, NA, 4, 4, 5, NA,…
## $ TimeSpent <dbl> 1555.1667, 1382.7001, 860.4335, 1598.6166, 1481.…
## $ TimeSpent_hours <dbl> 25.91944500, 23.04500167, 14.34055833, 26.643610…
## $ TimeSpent_std <dbl> -0.18051496, -0.30780313, -0.69325954, -0.148446…
## $ int <dbl> 5.0, 4.2, 5.0, 5.0, 3.8, 4.6, 5.0, 3.0, 4.2, NA,…
## $ pc <dbl> 4.50, 3.50, 4.00, 3.50, 3.50, 4.00, 3.50, 3.00, …
## $ uv <dbl> 4.333333, 4.000000, 3.666667, 5.000000, 3.500000…
We have one more question to pose to you: What do rows and columns typically represent in your area of work and/or research?
Generally, rows represent “cases”: the units that we measure, or the units on which we collect data. This is not a trick question! What counts as a “case” (and therefore what is represented as a row) varies by (and within) fields. There may be multiple types or levels of units studied in your field; listing more than one is fine! Also, please consider what columns - which usually represent variables - represent in your area of work and/or research.
What do rows typically (or you think may) represent:
What do columns typically (or you think may) represent:
Next, we’ll use a few functions that are handy for preparing data in table form.
By wrangle, we refer to the process of cleaning and processing data, and, in some cases, merging (or joining) data from multiple sources. Often, this part of the process is very (surprisingly) time-intensive! Wrangling your data into shape can itself be an important accomplishment! There are great tools in R to do this, especially through the use of the {dplyr} R package which is part of the tidyverse.
Let’s select only a few variables using a very powerful operator
called a pipe. Pipes are a powerful tool for combining
a sequence of functions or processes. The original pipe operator,
%>%, comes from the {magrittr} package
but all packages in the tidyverse load %>% for you
automatically, so you don’t usually load magrittr explicitly.
The pipe has become such a useful and widely used operator that a new, simpler version, the
|> operator, is now built into base R (version 4.1 and later). Run the following code chunk to
select() the student_id,
total_points_possible, and total_points_earned
variables from our sci_data:
sci_data |>
select(student_id, total_points_possible, total_points_earned)
## # A tibble: 603 × 3
## student_id total_points_possible total_points_earned
## <dbl> <dbl> <dbl>
## 1 43146 3280 2220
## 2 44638 3531 2672
## 3 47448 2870 1897
## 4 47979 4562 3090
## 5 48797 2207 1910
## 6 51943 4208 3596
## 7 52326 4325 2255
## 8 52446 2086 1719
## 9 53447 4655 3149
## 10 53475 1710 1402
## # ℹ 593 more rows
Notice how the number of columns (variables) is now different.
Let’s include one additional variable in your select function.
First, we need to figure out what variables exist in our dataset (or be reminded of this - it’s very common in R to be continually checking and inspecting your data)!
Recall that you can use a function named glimpse() to do
this.
glimpse(sci_data)
## Rows: 603
## Columns: 30
## $ student_id <dbl> 43146, 44638, 47448, 47979, 48797, 51943, 52326,…
## $ course_id <chr> "FrScA-S216-02", "OcnA-S116-01", "FrScA-S216-01"…
## $ total_points_possible <dbl> 3280, 3531, 2870, 4562, 2207, 4208, 4325, 2086, …
## $ total_points_earned <dbl> 2220, 2672, 1897, 3090, 1910, 3596, 2255, 1719, …
## $ percentage_earned <dbl> 0.6768293, 0.7567261, 0.6609756, 0.6773345, 0.86…
## $ subject <chr> "FrScA", "OcnA", "FrScA", "OcnA", "PhysA", "FrSc…
## $ semester <chr> "S216", "S116", "S216", "S216", "S116", "S216", …
## $ section <chr> "02", "01", "01", "01", "01", "03", "01", "01", …
## $ Gradebook_Item <chr> "POINTS EARNED & TOTAL COURSE POINTS", "ATTEMPTE…
## $ Grade_Category <lgl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, …
## $ FinalGradeCEMS <dbl> 93.45372, 81.70184, 88.48758, 81.85260, 84.00000…
## $ Points_Possible <dbl> 5, 10, 10, 5, 438, 5, 10, 10, 443, 5, 12, 10, 5,…
## $ Points_Earned <dbl> NA, 10.00, NA, 4.00, 399.00, NA, NA, 10.00, 425.…
## $ Gender <chr> "M", "F", "M", "M", "F", "F", "M", "F", "F", "M"…
## $ q1 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 4, 3, 5, NA,…
## $ q2 <dbl> 4, 4, 4, 5, 3, NA, 5, 3, 3, NA, NA, 5, 3, 3, NA,…
## $ q3 <dbl> 4, 3, 4, 3, 3, NA, 3, 3, 3, NA, NA, 3, 3, 5, NA,…
## $ q4 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 5, 3, 5, NA,…
## $ q5 <dbl> 5, 4, 5, 5, 4, NA, 5, 3, 4, NA, NA, 5, 4, 5, NA,…
## $ q6 <dbl> 5, 4, 4, 5, 4, NA, 5, 4, 3, NA, NA, 5, 3, 5, NA,…
## $ q7 <dbl> 5, 4, 4, 4, 4, NA, 4, 3, 3, NA, NA, 5, 3, 5, NA,…
## $ q8 <dbl> 5, 5, 5, 5, 4, NA, 5, 3, 4, NA, NA, 4, 3, 5, NA,…
## $ q9 <dbl> 4, 4, 3, 5, NA, NA, 5, 3, 2, NA, NA, 5, 2, 2, NA…
## $ q10 <dbl> 5, 4, 5, 5, 3, NA, 5, 3, 5, NA, NA, 4, 4, 5, NA,…
## $ TimeSpent <dbl> 1555.1667, 1382.7001, 860.4335, 1598.6166, 1481.…
## $ TimeSpent_hours <dbl> 25.91944500, 23.04500167, 14.34055833, 26.643610…
## $ TimeSpent_std <dbl> -0.18051496, -0.30780313, -0.69325954, -0.148446…
## $ int <dbl> 5.0, 4.2, 5.0, 5.0, 3.8, 4.6, 5.0, 3.0, 4.2, NA,…
## $ pc <dbl> 4.50, 3.50, 4.00, 3.50, 3.50, 4.00, 3.50, 3.00, …
## $ uv <dbl> 4.333333, 4.000000, 3.666667, 5.000000, 3.500000…
In the code chunk below, add a new variable, being careful to type the new variable name as it appears in the data. We’ve added some code to get you started. Consider how the names of the other variables are separated as you think about how to add an additional variable to this code.
sci_data |>
select(student_id, total_points_possible, total_points_earned)
## # A tibble: 603 × 3
## student_id total_points_possible total_points_earned
## <dbl> <dbl> <dbl>
## 1 43146 3280 2220
## 2 44638 3531 2672
## 3 47448 2870 1897
## 4 47979 4562 3090
## 5 48797 2207 1910
## 6 51943 4208 3596
## 7 52326 4325 2255
## 8 52446 2086 1719
## 9 53447 4655 3149
## 10 53475 1710 1402
## # ℹ 593 more rows
Once added, the output should be different from that in the code above - there should now be an additional variable included in the print-out.
Next, let’s explore filtering variables. Check out and run the next chunk of code, imagining that we wish to filter our data to view only the rows associated with students who earned a final grade (as a percentage) above 70.
sci_data |>
filter(FinalGradeCEMS > 70)
## # A tibble: 438 × 30
## student_id course_id total_points_possible total_points_earned
## <dbl> <chr> <dbl> <dbl>
## 1 43146 FrScA-S216-02 3280 2220
## 2 44638 OcnA-S116-01 3531 2672
## 3 47448 FrScA-S216-01 2870 1897
## 4 47979 OcnA-S216-01 4562 3090
## 5 48797 PhysA-S116-01 2207 1910
## 6 52326 AnPhA-S216-01 4325 2255
## 7 52446 PhysA-S116-01 2086 1719
## 8 53447 FrScA-S116-01 4655 3149
## 9 53475 FrScA-S216-01 1209 977
## 10 54066 OcnA-S116-01 4641 3429
## # ℹ 428 more rows
## # ℹ 26 more variables: percentage_earned <dbl>, subject <chr>, semester <chr>,
## # section <chr>, Gradebook_Item <chr>, Grade_Category <lgl>,
## # FinalGradeCEMS <dbl>, Points_Possible <dbl>, Points_Earned <dbl>,
## # Gender <chr>, q1 <dbl>, q2 <dbl>, q3 <dbl>, q4 <dbl>, q5 <dbl>, q6 <dbl>,
## # q7 <dbl>, q8 <dbl>, q9 <dbl>, q10 <dbl>, TimeSpent <dbl>,
## # TimeSpent_hours <dbl>, TimeSpent_std <dbl>, int <dbl>, pc <dbl>, uv <dbl>
In the next code chunk, change the cut-off from 70% to some other value - larger or smaller (maybe much larger or smaller - feel free to play around with the code a bit!).
sci_data |>
filter(FinalGradeCEMS > 70)
## # A tibble: 438 × 30
## student_id course_id total_points_possible total_points_earned
## <dbl> <chr> <dbl> <dbl>
## 1 43146 FrScA-S216-02 3280 2220
## 2 44638 OcnA-S116-01 3531 2672
## 3 47448 FrScA-S216-01 2870 1897
## 4 47979 OcnA-S216-01 4562 3090
## 5 48797 PhysA-S116-01 2207 1910
## 6 52326 AnPhA-S216-01 4325 2255
## 7 52446 PhysA-S116-01 2086 1719
## 8 53447 FrScA-S116-01 4655 3149
## 9 53475 FrScA-S216-01 1209 977
## 10 54066 OcnA-S116-01 4641 3429
## # ℹ 428 more rows
## # ℹ 26 more variables: percentage_earned <dbl>, subject <chr>, semester <chr>,
## # section <chr>, Gradebook_Item <chr>, Grade_Category <lgl>,
## # FinalGradeCEMS <dbl>, Points_Possible <dbl>, Points_Earned <dbl>,
## # Gender <chr>, q1 <dbl>, q2 <dbl>, q3 <dbl>, q4 <dbl>, q5 <dbl>, q6 <dbl>,
## # q7 <dbl>, q8 <dbl>, q9 <dbl>, q10 <dbl>, TimeSpent <dbl>,
## # TimeSpent_hours <dbl>, TimeSpent_std <dbl>, int <dbl>, pc <dbl>, uv <dbl>
What happens when you change the cut-off from 70 to something else? Add a thought (or more):
The last function we’ll use for preparing tables is arrange().
We’ll combine this arrange() function with a function we
used already - select(). We do this so we can view only the
student ID and their final grade.
sci_data |>
select(student_id, FinalGradeCEMS) |>
arrange(FinalGradeCEMS)
## # A tibble: 603 × 2
## student_id FinalGradeCEMS
## <dbl> <dbl>
## 1 90995 0
## 2 92606 0.535
## 3 95684 0.903
## 4 90996 1.80
## 5 94876 2.93
## 6 92633 3.01
## 7 85390 3.06
## 8 94630 3.43
## 9 90995 5.04
## 10 96677 5.2
## # ℹ 593 more rows
Note that arrange() works by sorting values in ascending order (from lowest to highest); you can change this by using the desc() function within arrange(), like the following:
sci_data |>
select(student_id, FinalGradeCEMS) |>
arrange(desc(FinalGradeCEMS))
## # A tibble: 603 × 2
## student_id FinalGradeCEMS
## <dbl> <dbl>
## 1 85650 100
## 2 91067 99.8
## 3 66740 99.3
## 4 86792 99.1
## 5 78153 99.0
## 6 66689 98.6
## 7 88261 98.6
## 8 92740 98.6
## 9 92726 98.2
## 10 92741 98.2
## # ℹ 593 more rows
In the code chunk below, replace FinalGradeCEMS that is used with both the select() and arrange() functions with a different variable in the data set. Consider returning to the code chunk above in which you glimpsed at the names of all of the variables.
sci_data |>
select(student_id, FinalGradeCEMS) |>
arrange(desc(FinalGradeCEMS))
## # A tibble: 603 × 2
## student_id FinalGradeCEMS
## <dbl> <dbl>
## 1 85650 100
## 2 91067 99.8
## 3 66740 99.3
## 4 86792 99.1
## 5 78153 99.0
## 6 66689 98.6
## 7 88261 98.6
## 8 92740 98.6
## 9 92726 98.2
## 10 92741 98.2
## # ℹ 593 more rows
Can you compose a series of functions that includes the select(), filter(), and arrange() functions? Recall that you can “pipe” the output from one function to the next, as when we used select() and arrange() together in the code chunk above.
This reach is optional; it’s just for those who wish to do a bit more with these functions at this time (we’ll do more in our learning labs, too!)
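If you get stuck, here is one way the three functions can be chained together - treat it as a sketch, since your choice of variables and cut-off may differ:

```r
# Chain select(), filter(), and arrange() with the pipe:
# keep three variables, keep rows with final grades above 70,
# and sort from highest to lowest grade.
sci_data |>
  select(student_id, FinalGradeCEMS, TimeSpent) |>
  filter(FinalGradeCEMS > 70) |>
  arrange(desc(FinalGradeCEMS))
```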
Exploratory data analysis, or exploring your data, involves processes of describing your data (such as by calculating the means and standard deviations of numeric variables, or counting the frequency of categorical variables) and, often, visualizing your data prior to analysis. As we’ll learn in later labs, the explore phase can also involve “feature engineering,” or creating new variables within a dataset (Krumm, Means, and Bienkowski 2018). In this section, we’ll quickly pull together some basic stats using a handy function from the {skimr} package, and introduce you to a basic data visualization “code template” for the {ggplot2} package from the tidyverse.
Let’s repurpose what we learned from our wrangle section to select
just a few variables and quickly gather some descriptive stats using the
skim() function from the {skimr} package.
sci_data |>
select(course_id, FinalGradeCEMS) |>
skim()
| Name | select(sci_data, course_i… |
| Number of rows | 603 |
| Number of columns | 2 |
| _______________________ | |
| Column type frequency: | |
| character | 1 |
| numeric | 1 |
| ________________________ | |
| Group variables | None |
Variable type: character
| skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
|---|---|---|---|---|---|---|---|
| course_id | 0 | 1 | 12 | 13 | 0 | 26 | 0 |
Variable type: numeric
| skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
|---|---|---|---|---|---|---|---|---|---|---|
| FinalGradeCEMS | 30 | 0.95 | 77.2 | 22.23 | 0 | 71.25 | 84.57 | 92.1 | 100 | ▁▁▁▃▇ |
Use the code from the chunk from above to explore some other
variables of interest from our sci_data.
sci_data |>
select(course_id, FinalGradeCEMS) |>
skim()
| Name | select(sci_data, course_i… |
| Number of rows | 603 |
| Number of columns | 2 |
| _______________________ | |
| Column type frequency: | |
| character | 1 |
| numeric | 1 |
| ________________________ | |
| Group variables | None |
Variable type: character
| skim_variable | n_missing | complete_rate | min | max | empty | n_unique | whitespace |
|---|---|---|---|---|---|---|---|
| course_id | 0 | 1 | 12 | 13 | 0 | 26 | 0 |
Variable type: numeric
| skim_variable | n_missing | complete_rate | mean | sd | p0 | p25 | p50 | p75 | p100 | hist |
|---|---|---|---|---|---|---|---|---|---|---|
| FinalGradeCEMS | 30 | 0.95 | 77.2 | 22.23 | 0 | 71.25 | 84.57 | 92.1 | 100 | ▁▁▁▃▇ |
What happens if you simply feed the skim() function the entire sci_data object? Give it a try!
Data visualization is an extremely common practice in Learning Analytics, especially in the use of data dashboards, and involves graphically representing one or more variables with the goal of discovering patterns in data. These patterns may help us generate questions about our data, discover relationships between and among variables, and create or select features for data modeling.
In this section we’ll focus on using a basic code template for the
{ggplot2} package from the tidyverse. ggplot2 is a system
for declaratively creating graphics, based on The
Grammar of Graphics. You provide the data, tell ggplot2 how to map
variables to aesthetics, what graphical elements to use, and it takes
care of the details.
At its core, you can create some very simple but attractive graphs with just a couple lines of code. Making a graph with {ggplot2} follows a common workflow. To make a graph, you:
Start the graph with ggplot() and include your data
as an argument;
“Add” elements to the graph using the + operator and a
geom_() function;
Select variables to graph on each axis using the
aes() function.
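Putting those three steps together yields a general template. Here, YOUR_DATA, GEOM_FUNCTION, and the VARIABLE names are placeholders for you to fill in, not real objects:

```r
# A generic {ggplot2} template: supply the data, "add" a geom layer
# with the + operator, and map variables to axes inside aes().
ggplot(YOUR_DATA) +
  GEOM_FUNCTION(aes(x = VARIABLE_1, y = VARIABLE_2))
```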
Let’s give it a try by creating a simple histogram of our
FinalGradeCEMS variable. The code below creates a
histogram, or a distribution of the values, in this case for students’
final grades. Go ahead and run it:
ggplot(sci_data) +
geom_histogram(aes(x = FinalGradeCEMS))
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 30 rows containing non-finite values (`stat_bin()`).
Note that the first function, ggplot(), creates a
coordinate system that you can “add” layers to using additional
functions and + operator. The first argument of
ggplot() is the dataset, in our case sci_data,
to use for the graph.
By itself, ggplot(data = sci_data) just creates an empty graph. But when you add a required geom_() function like geom_histogram(), you tell it which type of graph you want to make, in our case a histogram. A geom is the geometrical object that a plot uses to represent observations. People often describe plots by the type of geom that the plot uses. For example, bar charts use bar geoms, line charts use line geoms, boxplots use boxplot geoms, and so on. Scatterplots, which we’ll see in a bit, break the trend; they use the point geom.
The final required element for any graph is a mapping =
argument that defines which variables in your dataset are mapped to
which axes in your graph. The mapping argument is always
paired with the function aes(), which you use to gather
together all of the mappings that you want to create. In our case, since
we just created a simple histogram, we only had to specify what variable
to place on the x axis, which in our case was
FinalGradeCEMS.
We won’t spend a lot of time on it in this case study, but you can
also change the color of the histogram bars by adding an additional
argument to specify color. Let’s give that a try using the
fill = argument:
ggplot(sci_data) +
geom_histogram(aes(x = FinalGradeCEMS), fill = "blue")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 30 rows containing non-finite values (`stat_bin()`).
Now use the code chunk below to visualize the distribution of another
variable in the data, TimeSpent. You can do so by swapping
out the variable FinalGradeCEMS with our new variable.
Also, change the color to one of your choosing; consider this list of
valid color names here: http://www.stat.columbia.edu/~tzheng/files/Rcolor.pdf
ggplot(sci_data) +
geom_histogram(aes(x = TimeSpent), fill = "green")
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
## Warning: Removed 5 rows containing non-finite values (`stat_bin()`).
Finally, let’s create a scatterplot for the relationship between these two variables. Scatterplots use the point geom, i.e., the geom_point() function, and are most useful for displaying the relationship between two continuous variables.
Complete the code chunk below to create a simple scatterplot with
TimeSpent on the x axis and FinalGradeCEMS on the y
axis:
ggplot(sci_data) +
geom_point(aes(x = TimeSpent, y = FinalGradeCEMS))
Well done! As you can see, there appears to be a positive relationship between the time students spend in the online course and their final grade!
“Model” is one of those terms that has many different meanings. For our purpose, we refer to the process of simplifying and summarizing our data. Thus, models can take many forms; calculating means represents a legitimate form of modeling data, as does estimating more complex models, including linear regressions, and models and algorithms associated with machine learning tasks. For now, we’ll run a linear regression to predict students’ final grades.
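To make that concrete, even a simple summary of mean grades by subject counts as modeling the data. A quick sketch using the {dplyr} functions we loaded with the tidyverse:

```r
# "Model" the data by summarizing it: compute the mean final grade
# for each course subject, ignoring missing values.
sci_data |>
  group_by(subject) |>
  summarize(mean_grade = mean(FinalGradeCEMS, na.rm = TRUE))
```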
We’ll dive much deeper into modeling in subsequent learning labs, but for now let’s see if we can predict students’ final grades (FinalGradeCEMS, which is on a 0-100 point scale) on the basis of the time they spent in the course (TimeSpent, measured in minutes through their learning management system, or LMS) and the subject (one of five) of their specific course.
m1 <- lm(FinalGradeCEMS ~ TimeSpent + subject, data = sci_data)
summary(m1)
##
## Call:
## lm(formula = FinalGradeCEMS ~ TimeSpent + subject, data = sci_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -70.378 -8.836 4.816 12.855 36.047
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 57.3931739 2.3382193 24.546 < 2e-16 ***
## TimeSpent 0.0071098 0.0006516 10.912 < 2e-16 ***
## subjectBioA -1.5596482 3.6053075 -0.433 0.665
## subjectFrScA 11.7306546 2.2143847 5.297 1.68e-07 ***
## subjectOcnA 1.0974545 2.5771474 0.426 0.670
## subjectPhysA 16.0357213 3.0712923 5.221 2.50e-07 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 19.8 on 567 degrees of freedom
## (30 observations deleted due to missingness)
## Multiple R-squared: 0.213, Adjusted R-squared: 0.2061
## F-statistic: 30.69 on 5 and 567 DF, p-value: < 2.2e-16
It looks like TimeSpent and the subjects FrScA (forensic science) and PhysA (physics) are associated with a higher final grade. This indicates that students in those two classes earned higher grades than students in other science classes in this dataset, and that those who spent more time in the LMS also earned higher grades.
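A fitted model can also generate predictions for new cases. As a sketch - the student below is hypothetical, not drawn from the dataset - base R's predict() returns the expected final grade:

```r
# Predict the final grade for a hypothetical physics student
# who spent 2000 minutes in the LMS, using the fitted model m1.
new_student <- data.frame(TimeSpent = 2000, subject = "PhysA")
predict(m1, newdata = new_student)
```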
Notice how the variables above are separated by a + symbol. In the chunk below, add another - a third - variable to the regression model. Specifically, add a variable for students’ initial, self-reported interest in science, int - and any other variable(s) you like! What do you notice about the results? We’re going to dive into this much more: if you have many questions now, you’re in the right spot!
m2 <- lm(FinalGradeCEMS ~ TimeSpent + subject + int, data = sci_data)
summary(m2)
##
## Call:
## lm(formula = FinalGradeCEMS ~ TimeSpent + subject + int, data = sci_data)
##
## Residuals:
## Min 1Q Median 3Q Max
## -69.780 -8.642 4.475 12.979 39.054
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 58.0359984 6.9603713 8.338 7.50e-16 ***
## TimeSpent 0.0073816 0.0006982 10.572 < 2e-16 ***
## subjectBioA -0.7474948 3.9200300 -0.191 0.849
## subjectFrScA 13.6893765 2.4119882 5.676 2.35e-08 ***
## subjectOcnA 3.7050669 2.7493908 1.348 0.178
## subjectPhysA 18.4112956 3.2564035 5.654 2.65e-08 ***
## int -0.7534456 1.5064400 -0.500 0.617
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 19.94 on 496 degrees of freedom
## (100 observations deleted due to missingness)
## Multiple R-squared: 0.2289, Adjusted R-squared: 0.2196
## F-statistic: 24.54 on 6 and 496 DF, p-value: < 2.2e-16
The final step in the workflow/process is sharing the results of your analysis with a wider audience. Krumm et al. (2018) have outlined the following three-step process for communicating findings from an analysis with education stakeholders:
Select. Communicating what one has learned involves selecting among those analyses that are most important and most useful to an intended audience, as well as selecting a form for displaying that information, such as a graph or table in static or interactive form, i.e. a “data product.”
Polish. After creating initial versions of data products, research teams often spend time refining or polishing them, by adding or editing titles, labels, and notations and by working with colors and shapes to highlight key points.
Narrate. Writing a narrative to accompany the data products involves, at a minimum, pairing a data product with its related research question, describing how best to interpret the data product, and explaining the ways in which the data product helps answer the research question.
In later Learning Labs, you will have an opportunity to create a simple “data product” designed to illustrate some insights gained from your analysis and, ideally, highlight an “action step” that can be taken to act upon your findings.
For now, we will wrap up this case study by converting our work into a webpage that can be used to communicate your learning and demonstrate some of your new R skills. To do so, you will need to “knit” your document by clicking the Knit button next to the yarn ball in the menu bar at the top of this file. This will do two things; it will:
check through all your code for any errors; and,
create a file in your directory that you can use to share your work through Posit Cloud (see the screenshot example below to publish), RPubs, GitHub Pages, Quarto Pub, or other methods.
Complete the following steps to submit your work for review:
First, change the name of the author: in the YAML header at the very top of this document to your name. The YAML header controls the style and feel of the knitted document but doesn’t actually display in the final output.
Next, click the Knit button in the toolbar above to “knit” your R Markdown document to an HTML file that will be saved in your R Project folder. You should see a formatted webpage appear in your Viewer tab in the lower right pane or in a new browser window. Let us know if you run into any issues with knitting.
Finally, publish your webpage on Posit Cloud by clicking the “Publish” button located in the Viewer Pane after you knit your document. See the screenshot below.
Congratulations, you’ve completed your first case study! To receive credit for this assignment and earn your first LASER Badge, share the link to your published webpage under the Badge 1 Artifact column on the 2023 LASER Scholar Information and Documents spreadsheet: https://go.ncsu.edu/laser-sheet.
Once your instructor has checked your link, you will be provided a physical version of the badge below!